
    A method of classification for multisource data in remote sensing based on interval-valued probabilities

    An axiomatic approach to interval-valued (IV) probabilities is presented, where the IV probability is defined by a pair of set-theoretic functions which satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and call for more intelligent decision strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the high-dimensional data into smaller and more manageable pieces based on global statistical correlation information. By this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
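
    The abstract does not reproduce the axioms or the combination rule, so the Python sketch below only illustrates one standard instance of an interval-valued probability: the belief/plausibility pair of Dempster-Shafer theory, with two hypothetical bodies of evidence (MSS spectra and terrain data) fused by Dempster's rule. The class names and mass values are invented for illustration.

        # Hypothetical sketch: interval-valued class support as a [belief, plausibility]
        # pair, with two bodies of evidence fused by Dempster's rule. This is one common
        # instance of an IV probability, not necessarily the axiom system of the report.

        FRAME = frozenset({"forest", "water", "urban"})   # hypothetical ground-cover classes

        def belief(m, A):
            """Lower bound: total mass committed to subsets of A."""
            return sum(v for B, v in m.items() if B <= A)

        def plausibility(m, A):
            """Upper bound: total mass not contradicting A."""
            return sum(v for B, v in m.items() if B & A)

        def dempster(m1, m2):
            """Combine two basic probability assignments; conflicting mass is renormalised away."""
            combined, conflict = {}, 0.0
            for B, v1 in m1.items():
                for C, v2 in m2.items():
                    inter = B & C
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + v1 * v2
                    else:
                        conflict += v1 * v2
            return {A: v / (1.0 - conflict) for A, v in combined.items()}

        # Evidence from two hypothetical sources: MSS spectra and terrain data.
        m_mss     = {frozenset({"forest"}): 0.6, frozenset({"forest", "urban"}): 0.3, FRAME: 0.1}
        m_terrain = {frozenset({"forest", "water"}): 0.5, frozenset({"urban"}): 0.2, FRAME: 0.3}

        m = dempster(m_mss, m_terrain)
        for cls in sorted(FRAME):
            A = frozenset({cls})
            print(cls, [round(belief(m, A), 3), round(plausibility(m, A), 3)])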

    Method of Classification for Multisource Data in Remote Sensing Based on Interval-Valued Probabilities

    This work was supported by NASA Grant No. NAGW-925, “Earth Observation Research - Using Multistage EOS-like Data” (Principal Investigators: David A. Landgrebe and Chris Johannsen). The Anderson River SAR/MSS data set was acquired, preprocessed, and loaned to us by the Canada Centre for Remote Sensing, Department of Energy, Mines and Resources, of the Government of Canada. The importance of utilizing multisource data in ground-cover classification lies in the fact that improvements in classification accuracy can be achieved at the expense of additional independent features provided by separate sensors. However, it should be recognized that information and knowledge from most available data sources in the real world are neither certain nor complete. We refer to such a body of uncertain, incomplete, and sometimes inconsistent information as “evidential information.” The objective of this research is to develop a mathematical framework within which various applications can be made with multisource data in remote sensing and geographic information systems. The methodology described in this report has evolved from “evidential reasoning,” where each data source is considered as providing a body of evidence with a certain degree of belief. The degrees of belief based on the body of evidence are represented by “interval-valued (IV) probabilities” rather than by conventional point-valued probabilities, so that uncertainty can be embedded in the measures. There are three fundamental problems in multisource data analysis based on IV probabilities: (1) how to represent bodies of evidence by IV probabilities, (2) how to combine IV probabilities to give an overall assessment of the combined body of evidence, and (3) how to make a decision when the statistical evidence is given by IV probabilities. This report first introduces an axiomatic approach to IV probabilities, where the IV probability is defined by a pair of set-theoretic functions which satisfy some pre-specified axioms. On the basis of this approach, the report focuses on the representation of statistical evidence by IV probabilities and the combination of multiple bodies of evidence. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and call for more intelligent decision strategies. The report also focuses on the development of decision rules over IV probabilities from the viewpoint of statistical pattern recognition. The proposed method, the so-called “evidential reasoning” method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the high-dimensional data into smaller and more manageable pieces based on global statistical correlation information. By this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
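
    The divide-and-combine step is described above only at a high level, so the grouping criterion in the sketch below (greedy grouping of bands whose absolute correlation with a seed band exceeds a threshold) is an assumption for illustration, not the report's exact procedure; the function name and the 0.8 threshold are hypothetical.

        # Sketch (not the report's exact procedure): split a high-dimensional band set
        # into a few "sources" so that strongly correlated bands stay together; each
        # group can then be classified separately and the results combined.
        import numpy as np

        def group_bands_by_correlation(X, threshold=0.8):
            """X: (n_pixels, n_bands) array. Returns a list of band-index groups."""
            corr = np.abs(np.corrcoef(X, rowvar=False))   # band-by-band correlation magnitude
            unassigned = list(range(corr.shape[0]))
            groups = []
            while unassigned:
                seed = unassigned.pop(0)
                group = [seed]
                # pull in every remaining band strongly correlated with the seed band
                for b in unassigned[:]:
                    if corr[seed, b] >= threshold:
                        group.append(b)
                        unassigned.remove(b)
                groups.append(group)
            return groups

        # six synthetic bands: bands 0-1, 2-3, 4-5 are highly correlated pairs
        rng = np.random.default_rng(0)
        base = rng.normal(size=(1000, 3))
        X = np.column_stack([base[:, 0], base[:, 0] + 0.05 * rng.normal(size=1000),
                             base[:, 1], base[:, 1] + 0.05 * rng.normal(size=1000),
                             base[:, 2], base[:, 2] + 0.05 * rng.normal(size=1000)])
        print(group_bands_by_correlation(X))              # e.g. [[0, 1], [2, 3], [4, 5]]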

    Edge Device Deployment of Multi-Tasking Network for Self-Driving Operations

    A safe and robust autonomous driving system relies on accurate perception of the environment for application-oriented scenarios. This paper proposes the deployment of the three most crucial tasks (i.e., object detection, drivable area segmentation, and lane detection) on an embedded system for self-driving operations. To achieve this research objective, a multi-tasking network with a simple encoder-decoder architecture is utilized. Comprehensive and extensive comparisons of two models based on different backbone networks are performed. All training experiments are performed on a server, while an Nvidia Jetson Xavier NX is chosen as the deployment device. Comment: arXiv admin note: text overlap with arXiv:1908.08926 by other authors
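
    As a rough structural sketch of the shared-encoder, multi-head layout described above, the PyTorch snippet below wires one encoder to three task heads (detection, drivable-area segmentation, lane detection). The actual backbones, necks, and head designs compared in the paper are not given in this abstract, so every module and output size here is a placeholder chosen only to show how the three tasks share features.

        # Placeholder multi-task layout: one shared encoder, three task-specific heads.
        import torch
        import torch.nn as nn

        class MultiTaskNet(nn.Module):
            def __init__(self, num_det_outputs=5, num_seg_classes=2, num_lane_classes=2):
                super().__init__()
                # shared encoder (stand-in for the backbone network)
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                # per-cell detection head (e.g. objectness + box offsets)
                self.det_head = nn.Conv2d(64, num_det_outputs, 1)
                # two segmentation decoders that upsample back to input resolution
                def seg_decoder(num_classes):
                    return nn.Sequential(
                        nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                        nn.ConvTranspose2d(32, num_classes, 2, stride=2),
                    )
                self.drivable_head = seg_decoder(num_seg_classes)
                self.lane_head = seg_decoder(num_lane_classes)

            def forward(self, x):
                feats = self.encoder(x)
                return {
                    "detection": self.det_head(feats),
                    "drivable_area": self.drivable_head(feats),
                    "lane": self.lane_head(feats),
                }

        model = MultiTaskNet()
        out = model(torch.randn(1, 3, 256, 256))
        print({k: tuple(v.shape) for k, v in out.items()})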

    Real-Time Action Recognition Using Multi-level Action Descriptor and DNN

    This work presents a novel approach to the problem of real-time human action recognition in intelligent video surveillance. For more efficient and precise labeling of an action, this work proposes a multilevel action descriptor, which delivers complete information about human actions. The action descriptor consists of three levels: posture, locomotion, and gesture; each level corresponds to a different group of subactions describing a single human action, for example, smoking while walking. The proposed action recognition method is able to simultaneously localize and recognize the actions of multiple individuals using appearance-based temporal features with multiple convolutional neural networks (CNNs). Although appearance cues have been successfully exploited for visual recognition problems, appearance, motion history, and their combined cues with multiple CNNs have not yet been explored. Additionally, the first systematic estimation of several hyperparameters for shape and motion history cues is investigated. The proposed approach achieves a mean average precision (mAP) of 73.2% in the frame-based evaluation on the newly collected large-scale ICVL video dataset. The action recognition model runs at around 25 frames per second, which is suitable for real-time surveillance applications.
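
    One of the cues mentioned above, the motion history cue, can be illustrated with a small NumPy sketch that builds a motion history image (MHI) from thresholded frame differences; such an image could be fed to a CNN alongside raw appearance frames. The threshold and decay duration below are assumed values, not the hyperparameters estimated in the paper.

        # Illustrative motion history image: recent motion bright, older motion fading.
        import numpy as np

        def update_mhi(mhi, prev_frame, frame, threshold=30, duration=25.0):
            """Decay the existing history by one step and stamp new motion at full strength."""
            motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > threshold
            mhi = np.maximum(mhi - 1.0, 0.0)    # older motion fades out over `duration` frames
            mhi[motion] = duration              # newest motion gets the maximum value
            return mhi

        # toy grayscale sequence with a bright block moving to the right
        frames = [np.zeros((64, 64), dtype=np.uint8) for _ in range(5)]
        for t, f in enumerate(frames):
            f[20:40, 10 + 5 * t:30 + 5 * t] = 255

        mhi = np.zeros((64, 64), dtype=np.float32)
        for prev, cur in zip(frames, frames[1:]):
            mhi = update_mhi(mhi, prev, cur)
        print(mhi.max(), (mhi > 0).sum())       # recent motion dominates the history image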

    A critical review on computer vision and artificial intelligence in food industry

    Emerging technologies such as computer vision and Artificial Intelligence (AI) are expected to leverage the accessibility of big data for training and to yield operational, real-time smart machines and predictive models. This application of vision and learning methods to improve the food industry is termed the computer vision and AI-driven food industry. This review provides an insight into state-of-the-art AI and computer vision technologies that can assist farmers in agriculture and food processing. The paper investigates various scenarios and use cases of machine learning, machine vision, and deep learning from a global perspective through the lens of sustainability. It explains the increasing demand in the AgTech industry for computer vision and AI, which might be a path towards sustainable food production to feed the future. The review also raises implications regarding challenges and recommendations for including these technologies in real-time farming, as well as substantial global policies and investments. Finally, the paper discusses the possibility of using Fourth Industrial Revolution (4IR) technologies such as deep learning and computer vision robotics as a key to sustainable food production.